
Gradual Release of Sensitive Data under Differential Privacy



Abstract

We introduce the problem of releasing sensitive data under differential privacy when the privacy level is subject to change over time. Existing work assumes that the privacy level is determined by the system designer as a fixed value before sensitive data is released. For certain applications, however, users may wish to relax the privacy level for subsequent releases of the same data after either a re-evaluation of the privacy concerns or the need for better accuracy. Specifically, given a database containing sensitive data, we assume that a response $y_1$ that preserves $\epsilon_{1}$-differential privacy has already been published. Then, the privacy level is relaxed to $\epsilon_2$, with $\epsilon_2 > \epsilon_1$, and we wish to publish a more accurate response $y_2$ while the joint response $(y_1, y_2)$ preserves $\epsilon_2$-differential privacy. How much accuracy is lost in the scenario of gradually releasing two responses $y_1$ and $y_2$ compared to the scenario of releasing a single response that is $\epsilon_{2}$-differentially private? Our results show that there exists a composite mechanism that achieves \textit{no loss} in accuracy. We consider the case in which the private data lies within $\mathbb{R}^{n}$ with an adjacency relation induced by the $\ell_{1}$-norm, and we focus on mechanisms that approximate identity queries. We show that the same accuracy can be achieved in the case of gradual release through a mechanism whose outputs can be described by a \textit{lazy Markov stochastic process}. This stochastic process has a closed-form expression and can be efficiently sampled. Our results are applicable beyond identity queries. To this end, we demonstrate that our results can be applied in several cases, including Google's RAPPOR project, trading of sensitive data, and controlled transmission of private data in a social network.
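The setting can be illustrated with a minimal sketch. The snippet below implements only the standard Laplace mechanism for a one-dimensional identity query and the naive two-release baseline that the paper improves on: publish $y_1$ at level $\epsilon_1$, then publish an independent $y_2$ at level $\epsilon_2 - \epsilon_1$, so that by sequential composition the pair $(y_1, y_2)$ is $\epsilon_2$-differentially private. The paper's correlated lazy-Markov construction, which makes $y_2$ as accurate as a single $\epsilon_2$-private release, is not reproduced here; all names and the example values are illustrative assumptions, not taken from the paper.

```python
import math
import random

def laplace(scale: float, rng: random.Random) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def release(x: float, eps: float, rng: random.Random) -> float:
    """eps-differentially-private Laplace mechanism for an identity query
    with l1-sensitivity 1: add noise of scale 1/eps."""
    return x + laplace(1.0 / eps, rng)

rng = random.Random(0)
x = 10.0                 # hypothetical sensitive value
eps1, eps2 = 0.5, 1.0    # initial and relaxed privacy levels

# First release at level eps1.
y1 = release(x, eps1, rng)

# Naive refinement: an independent release spending the remaining budget.
# Sequential composition makes (y1, y2_naive) eps2-DP, but y2_naive has
# noise scale 1/(eps2 - eps1) = 2, worse than a fresh eps2-DP release.
y2_naive = release(x, eps2 - eps1, rng)

# Reference point: a single eps2-DP release with noise scale 1/eps2 = 1.
# The paper shows a correlated (y1, y2) achieving this accuracy with
# the joint pair still eps2-DP -- "no loss" from gradual release.
y2_ideal = release(x, eps2, rng)
```

The gap between the noise scales of `y2_naive` and `y2_ideal` is exactly the accuracy loss that the composite mechanism of the paper eliminates.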
